Performance analysis of Particle in Cell Electromagnetic code using Infiniband Interconnect

Author

  • G. Singh
Abstract

Particle-in-Cell (PIC) codes are widely used to simulate many plasma-related systems (e.g., laser-plasma interactions and high-power microwave sources). Detailed simulation of these systems requires a parallel computing facility with fast CPUs and efficient interconnects. We have set up a 33-node Xeon (dual-socket, dual-core) cluster with a double data rate (DDR) InfiniBand interconnect, in addition to our existing 144-node Pentium 4 cluster with a gigabit interconnect. The basic network performance parameters (latency and bandwidth) and the performance of the PIC code on a 32-node Xeon cluster with the DDR InfiniBand interconnect have been studied. Studies carried out using 2 cores per node and 4 cores per node are reported.
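The latency and bandwidth figures mentioned above are typically derived from a ping-pong micro-benchmark between two nodes (as in common MPI benchmark suites). As a minimal sketch of that arithmetic, assuming a measured round-trip time for a message of known size (the numbers below are illustrative, not results from the paper):

```python
def ping_pong_metrics(msg_bytes: int, round_trip_s: float):
    """Derive one-way latency (s) and effective bandwidth (bytes/s)
    from a ping-pong round-trip measurement between two nodes."""
    one_way_latency = round_trip_s / 2.0          # half the round trip
    bandwidth = msg_bytes / one_way_latency       # bytes moved per second
    return one_way_latency, bandwidth

# Illustrative example: a 1 MiB message with a 200-microsecond round trip.
lat, bw = ping_pong_metrics(1 << 20, 200e-6)
print(f"latency = {lat * 1e6:.1f} us, bandwidth = {bw / 1e9:.2f} GB/s")
```

In practice the message size is swept from a few bytes (to expose latency) up to megabytes (to expose peak bandwidth), which is how interconnects such as DDR InfiniBand and gigabit Ethernet are usually compared.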


Similar resources

Intel® True Scale Fabric Architecture: Enhanced HPC Architecture and Performance

Key Findings: InfiniBand* Architectures – There are two types of InfiniBand architectures available in the marketplace today. The first is the traditional InfiniBand design, created as a channel interconnect for the data center. The latest InfiniBand architecture was built with HPC in mind. This enhanced HPC fabric offering is optimized for key interconnect performance factors, featuring MPI ...

Full text

Parallel Performance Studies for a Hyperbolic Test Problem

The performance of parallel computer code depends on an intricate interplay of the processors, the architecture of the compute nodes, their interconnect network, the numerical algorithm, and the scheduling policy used. This note considers a case study of a solver of a system of transient hyperbolic conservation laws which utilizes both point-to-point and collective communications between parall...

Full text

Throughput studies on an InfiniBand interconnect via all-to-all communications

Distributed-memory clusters are the most important type of parallel computer today, and they dominate the TOP500 list. The InfiniBand interconnect is the most popular network for distributed-memory compute clusters. Contention of communications across a switched network that connects multiple compute nodes in a distributed-memory cluster may seriously degrade performance of parallel code. This ...

Full text

Parallel Performance Studies for a Three-Species Application Problem on the Cluster tara

High performance parallel computing depends on the interaction of a number of factors including the processors, the architecture of the compute nodes, their interconnect network, and the numerical code. In this note, we present performance and scalability studies on the cluster tara using a well established parallelized code for a three-species application problem. This application problem requ...

Full text

Can NIC Memory in InfiniBand Benefit Communication Performance? — A Study with Mellanox Adapter

This paper presents a comprehensive micro-benchmark performance evaluation on using NIC memory in the Mellanox InfiniBand adapter. Three main benefits have been explored, including non-blocking and high performance host/NIC data movement, traffic reduction of the local interconnect, and avoidance of the local interconnect bottleneck. Two case studies have been carried out to show how these bene...

Full text


Journal:

Volume   Issue

Pages  -

Publication date: 2007